
Precise deep neural network computation on imprecise low-power analog hardware


Abstract

There is an urgent need for compact, fast, and power-efficient hardware implementations of state-of-the-art artificial intelligence. Here we propose a power-efficient approach for real-time inference, in which deep neural networks (DNNs) are implemented through low-power analog circuits. Although analog implementations can be extremely compact, they have been largely supplanted by digital designs, partly because of device mismatch effects due to fabrication. We propose a framework that exploits the power of deep learning to compensate for this mismatch by incorporating the measured variations of the devices as constraints in the DNN training process. This eliminates the need for mismatch-minimization strategies, such as the use of very large transistors, and allows circuit complexity and power consumption to be reduced to a minimum. Our results, based on large-scale simulations as well as a prototype VLSI chip implementation, indicate at least a 3-fold improvement in processing efficiency over current digital implementations.
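The core idea of the abstract, folding measured device variations into training so that learning compensates for analog mismatch, can be illustrated with a toy sketch. The code below is a minimal, hypothetical illustration (not the paper's actual framework): a fixed per-synapse multiplicative gain stands in for calibrated device mismatch, the forward pass always goes through these gains, and gradient descent on the programmable weights learns to cancel them out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured per-synapse gain mismatch (e.g., obtained from chip
# calibration). In mismatch-aware training these values are fixed constraints,
# not trainable parameters.
n_in, n_out = 8, 4
gain = 1.0 + 0.2 * rng.standard_normal((n_in, n_out))

# Toy regression task the "analog layer" must realize.
X = rng.standard_normal((64, n_in))
W_true = rng.standard_normal((n_in, n_out))
Y = X @ W_true

W = np.zeros((n_in, n_out))  # programmable weights stored on-chip
lr = 0.05
for _ in range(3000):
    # Forward pass through the mismatched layer: effective weight = gain * W.
    pred = X @ (gain * W)
    grad_eff = X.T @ (pred - Y) / len(X)  # gradient w.r.t. effective weights
    W -= lr * gain * grad_eff             # chain rule through the fixed gains

final_err = np.mean((X @ (gain * W) - Y) ** 2)
```

After training, the effective weights `gain * W` approximate `W_true`, so the mismatched circuit computes the intended function; the learned `W` simply absorbs the inverse of each device's measured deviation.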
